Conversation

Contributor

@enigbe enigbe commented Dec 14, 2025

What this PR does

  • Adds a PostgreSQL Docker service for the Rust server, removing the need for a local PostgreSQL installation
  • Introduces a configurable maximum size limit for incoming request bodies, addressing a previously noted TODO
    • Operators can optionally set maximum_request_body_size in [server_config]
    • Defaults to 10 MB if not specified; capped at 20 MB (see the sketch below)
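
A minimal sketch of how the clamping described above might look, assuming hypothetical names (`ServerConfig`, `request_body_limit`) and serde-style config deserialization rather than the PR's actual types:

```rust
use serde::Deserialize;

// Limits mirroring the PR description: 10 MB default, 20 MB cap (assumed values).
const DEFAULT_REQUEST_BODY_SIZE: usize = 10 * 1024 * 1024;
const MAXIMUM_REQUEST_BODY_SIZE: usize = 20 * 1024 * 1024;

#[derive(Deserialize)]
struct ServerConfig {
	// Optional under [server_config]; operators may omit it entirely.
	maximum_request_body_size: Option<usize>,
}

impl ServerConfig {
	/// Effective per-request body limit: the configured value, defaulted and capped.
	fn request_body_limit(&self) -> usize {
		self.maximum_request_body_size
			.unwrap_or(DEFAULT_REQUEST_BODY_SIZE)
			.min(MAXIMUM_REQUEST_BODY_SIZE)
	}
}
```

Under these assumptions an operator could set, for example, `maximum_request_body_size = 5242880` under `[server_config]` to lower the limit to 5 MB.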


ldk-reviews-bot commented Dec 14, 2025

I've assigned @tankyleo as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@enigbe enigbe force-pushed the 2025-12-bound-incoming-request-and-postgres-service branch from 4bfb403 to a163ae7 on December 14, 2025 at 22:45
use std::pin::Pin;
use std::sync::Arc;

const MAXIMUM_REQUEST_BODY_SIZE: u16 = 65_535;
Contributor

This seems very conservative, given that monitors could get quite large. Since the VSS service is actually a storage service, it might also make sense to make this configurable (in contrast to lightningdevkit/ldk-server#80, though even there we set the limit to 10MB).

Contributor

As mentioned elsewhere: I guess a static upper bound is a good first step, but if we're really concerned about DoS we might need some dynamic rate limiting on a per-IP basis. Although then the question becomes how much of that should be considered the concern of the VSS service itself, and how much we'd just expect users to slap a load balancer/Cloudflare in front of the service to handle that for them.
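
For reference, a minimal sketch of the kind of per-IP limiting mentioned here, assuming a simple fixed-window counter kept in a mutex-guarded map; all names are hypothetical, and a real deployment might instead rely on middleware or an upstream load balancer:

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Fixed-window counter: allow at most `max_requests` per `window` per client IP.
struct RateLimiter {
	window: Duration,
	max_requests: u32,
	counters: Mutex<HashMap<IpAddr, (Instant, u32)>>,
}

impl RateLimiter {
	fn new(window: Duration, max_requests: u32) -> Self {
		Self { window, max_requests, counters: Mutex::new(HashMap::new()) }
	}

	/// Returns true if a request from `ip` is still within its budget.
	fn allow(&self, ip: IpAddr) -> bool {
		let now = Instant::now();
		let mut counters = self.counters.lock().unwrap();
		let entry = counters.entry(ip).or_insert((now, 0));
		if now.duration_since(entry.0) > self.window {
			// The window expired; start a fresh one for this IP.
			*entry = (now, 0);
		}
		entry.1 += 1;
		entry.1 <= self.max_requests
	}
}
```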

Contributor Author

I've made the configuration changes suggested here, capping the maximum size at 20 MB.

Contributor

No, as mentioned elsewhere, individual values might be quite a bit larger than 10 or 20MB. I think something along the lines of 100MB would be a more reasonable maximum, though it raises the question of how much of a DoS protection a per-request limit actually is.

@enigbe enigbe force-pushed the 2025-12-bound-incoming-request-and-postgres-service branch from 5d391cc to fbdd957 on December 16, 2025 at 08:36
@ldk-reviews-bot

🔔 1st Reminder

Hey @tankyleo! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

Contributor

@tnull tnull left a comment

Can we also add some test coverage to ensure the database / backends actually support the configured maximum values, i.e., that we're not otherwise limited somehow?
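
One possible shape for such a test, sketched directly against Postgres with `tokio-postgres`; the connection string, table, and column names are placeholders rather than the server's real schema, and the payload size would track whatever maximum ends up configured:

```rust
use tokio_postgres::NoTls;

#[tokio::test]
async fn backend_accepts_maximum_sized_value() {
	let (client, connection) =
		tokio_postgres::connect("host=localhost user=postgres dbname=postgres", NoTls)
			.await
			.expect("failed to connect to the Docker Postgres service");
	// The connection future must be driven for queries to make progress.
	tokio::spawn(connection);

	// Placeholder schema; the real table and columns live in the server's migrations.
	client
		.execute("CREATE TABLE IF NOT EXISTS vss_test (key TEXT PRIMARY KEY, value BYTEA)", &[])
		.await
		.unwrap();

	// A value at the configured maximum (20 MiB here; adjust to the server's cap).
	let value = vec![0xABu8; 20 * 1024 * 1024];
	client
		.execute("INSERT INTO vss_test (key, value) VALUES ($1, $2)", &[&"max-key", &value])
		.await
		.expect("backend rejected a maximum-sized value");

	let row = client
		.query_one("SELECT value FROM vss_test WHERE key = $1", &[&"max-key"])
		.await
		.unwrap();
	let stored: Vec<u8> = row.get(0);
	assert_eq!(stored.len(), value.len());
}
```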

use std::sync::Arc;

const MAXIMUM_REQUEST_BODY_SIZE: usize = 20 * 1024 * 1024;
const DEFAULT_REQUEST_BODY_SIZE: usize = 10 * 1024 * 1024;
Contributor

Not quite sure why we have two values now? Why have DEFAULT and a separate MAXIMUM?

use std::pin::Pin;
use std::sync::Arc;

const MAXIMUM_REQUEST_BODY_SIZE: usize = 20 * 1024 * 1024;

10/20MiB is quite small. We generally expect VSS clients to be edge nodes storing small payment histories, but if someone does use it as a routing node it's easy for monitors to get into the 100s of MiB. Postgres' limit for non-"large object" storage is 1GiB, so that probably seems reasonable as the hard max request size (though we should maybe consider supporting large objects for larger monitor storage).
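
For illustration, a rough sketch of how a hard cap along these lines could be enforced on incoming bodies, assuming a hyper 1.x handler and the `http-body-util` crate; this is not necessarily how the PR wires it up:

```rust
use http_body_util::{BodyExt, Limited};
use hyper::body::Incoming;
use hyper::{Request, StatusCode};

// Postgres' non-"large object" column limit, as discussed above.
const HARD_MAX_REQUEST_BODY_SIZE: usize = 1024 * 1024 * 1024; // 1 GiB

async fn read_bounded_body(request: Request<Incoming>) -> Result<Vec<u8>, StatusCode> {
	// Limited errors out once the byte budget is exceeded, so an oversized
	// upload is rejected without being buffered in full.
	let limited = Limited::new(request.into_body(), HARD_MAX_REQUEST_BODY_SIZE);
	match limited.collect().await {
		Ok(collected) => Ok(collected.to_bytes().to_vec()),
		Err(_) => Err(StatusCode::PAYLOAD_TOO_LARGE),
	}
}
```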

Verifies that storage backends can handle the configured
maximum value size (~1 GB). Also increases
MAXIMUM_REQUEST_BODY_SIZE to 1 GB to align
server-side validation with storage capacity.
@enigbe enigbe force-pushed the 2025-12-bound-incoming-request-and-postgres-service branch from 5a264a6 to 97b6b25 on January 10, 2026 at 17:02